    Data-Driven Audiogram Classification for Mobile Audiometry

    Recent mobile and automated audiometry technologies have allowed for the democratization of hearing healthcare, enabling non-experts to deliver hearing tests. The problem remains that many such users are not trained to interpret audiograms. In this work, we outline the development of a data-driven audiogram classification system designed specifically to describe audiograms concisely. More specifically, we present how a training dataset was assembled and how the classification system was developed using supervised learning techniques. We show that three practicing audiologists had high intra- and inter-rater agreement on audiogram classification tasks pertaining to audiogram configuration, symmetry and severity. The system proposed here achieves a performance comparable to the state of the art, but is signific…
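The intra- and inter-rater agreement mentioned above is commonly quantified with a chance-corrected statistic such as Cohen's kappa (the abstract does not name the exact measure used). A minimal sketch of Cohen's kappa for two raters labelling audiogram configurations, with hypothetical labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of items both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently at random
    # according to their own marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical configuration labels from two audiologists.
a = ["flat", "sloping", "flat", "rising", "sloping", "flat"]
b = ["flat", "sloping", "flat", "sloping", "sloping", "flat"]
print(round(cohens_kappa(a, b), 3))  # → 0.714
```

Values near 1 indicate strong agreement beyond chance; the same computation applied to one rater's labels across repeated sessions gives an intra-rater estimate.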

    Mining Audiograms to Improve the Interpretability of Automated Audiometry Measurements

    Many people with hearing loss are unaware of it and do not benefit from available interventions such as hearing aids. This is in part due to limited access to qualified hearing healthcare providers in developing and developed countries alike. Automated audiometry, which has gained in popularity amidst the torrent of advances in telemedicine and mobile health, makes it possible to deliver basic hearing tests to remote or otherwise underserved communities at low cost. While this technology makes it possible to perform hearing assessments outside of a sound booth, many individuals administering the test are non-specialists and thus have a limited ability to interpret audiometric measurements and to make tailored recommendations. In this paper, we present the first steps towards the development of a flexible, supervised learning approach for the classification of audiograms in terms of their shape, severity and symmetry. More specifically, we outline our approach to building a set of non-redundant, annotation-ready audiograms from a much larger dataset. In addition, we present a Rapid Audiogram Annotation Environment (RAAE) designed specifically for the collection of audiogram annotations from a large community of expert audiologists. Preliminary results indicate that annotations provided through our environment are consistent, leading to low intra-coder variability. Data gathered through the RAAE will form the basis of learning algorithms to help non-experts make better dec…
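To make the severity and symmetry labels concrete: such classes are conventionally derived from pure-tone thresholds, often via the pure-tone average (PTA). The sketch below uses a common clinical grading convention; the frequencies, cutoffs, and 10 dB symmetry gap are illustrative assumptions, not values taken from the paper:

```python
# Frequencies conventionally averaged for the pure-tone average (PTA).
FREQS = (500, 1000, 2000, 4000)

def pure_tone_average(audiogram):
    """audiogram: dict mapping frequency (Hz) -> threshold (dB HL)."""
    return sum(audiogram[f] for f in FREQS) / len(FREQS)

def severity(pta):
    """Map a PTA to a severity grade (common convention; cutoffs assumed)."""
    if pta <= 25:
        return "normal"
    if pta <= 40:
        return "mild"
    if pta <= 55:
        return "moderate"
    if pta <= 70:
        return "moderately severe"
    if pta <= 90:
        return "severe"
    return "profound"

def symmetry(left, right, max_gap_db=10):
    """Call the ears symmetric if their PTAs differ by at most max_gap_db."""
    gap = abs(pure_tone_average(left) - pure_tone_average(right))
    return "symmetric" if gap <= max_gap_db else "asymmetric"

# Hypothetical per-ear thresholds in dB HL.
left = {500: 30, 1000: 35, 2000: 45, 4000: 60}
right = {500: 30, 1000: 40, 2000: 50, 4000: 65}
print(severity(pure_tone_average(left)), symmetry(left, right))
# → moderate symmetric
```

A supervised classifier as described in the abstract would learn such mappings (including the harder shape/configuration classes) from expert annotations rather than fixed rules, but the rule-based version above shows the label space being targeted.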